12 research outputs found
Age of Processing-Based Data Offloading for Autonomous Vehicles in Multi-RATs Open RAN
Today, vehicles use smart sensors to collect data from the road environment.
This data is often processed onboard the vehicle using expensive hardware.
Such onboard processing increases the vehicle's cost, quickly drains its
battery, and exhausts its computing resources. Therefore, offloading tasks onto
the cloud is required. Still, data offloading is challenging due to the
low-latency requirements for safe and reliable vehicle driving decisions.
Moreover, the age of
processing was not considered in prior research dealing with low-latency
offloading for autonomous vehicles. This paper proposes an age of
processing-based offloading approach for autonomous vehicles using unsupervised
machine learning, Multi-Radio Access Technologies (multi-RATs), and Edge
Computing in Open Radio Access Network (O-RAN). We design a collaboration space
of edge clouds to process data in proximity to autonomous vehicles. To reduce
the variation in offloading delay, we propose a new communication planning
approach that enables the vehicle to optimally preselect the available RATs
such as Wi-Fi, LTE, or 5G to offload tasks to edge clouds when its local
resources are insufficient. We formulate an optimization problem for age-based
offloading that minimizes the elapsed time between generating tasks and
receiving the computation output. To handle this non-convex problem, we develop
a surrogate problem. Then, we use the Lagrangian method to transform the
surrogate problem into an unconstrained optimization problem and apply the dual
decomposition method.
The simulation results show that our approach significantly reduces the age
of processing in data offloading, with a 90.34% improvement over a similar
method.
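The surrogate-plus-dual-decomposition step can be illustrated on a toy capacity-constrained delay model. This is a minimal sketch, not the paper's actual formulation: the task costs, capacity, and step size below are invented for illustration.

```python
import math

# Toy problem: allocate edge-cloud compute C among tasks to minimize total
# processing delay sum(c_i / x_i) subject to sum(x_i) <= C. The Lagrangian
# decouples the tasks, and dual (sub)gradient ascent updates the multiplier.
def dual_decomposition(c, C, eta=0.1, iters=300):
    lam = 0.5  # dual variable for the capacity constraint
    for _ in range(iters):
        # Per-task subproblem min_x c_i/x + lam*x has the closed form
        # x_i = sqrt(c_i / lam), so each task is solved independently.
        x = [math.sqrt(ci / lam) for ci in c]
        # Dual ascent on the violated capacity constraint.
        lam = max(1e-9, lam + eta * (sum(x) - C))
    return x, lam

x, lam = dual_decomposition([1.0, 4.0, 9.0], C=6.0)
# for this toy problem the optimum allocates x_i proportional to sqrt(c_i)
```

The closed-form inner step is what makes dual decomposition attractive here: given the multiplier, every task's allocation is computed locally.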
Federated Learning Assisted Deep Q-Learning for Joint Task Offloading and Fronthaul Segment Routing in Open RAN
Offloading computation-intensive tasks to edge clouds has become an efficient
way to support resource-constrained edge devices. However, task offloading
delay is an issue, largely due to the limited capacities of the networks
between edge clouds and edge devices. In this paper, we consider task
offloading in Open
Radio Access Network (O-RAN), which is a new 5G RAN architecture allowing Open
Central Unit (O-CU) to be co-located with the Open Distributed Unit (O-DU) at
the
edge cloud for low-latency services. O-RAN relies on the fronthaul network to
connect O-RAN Radio Units (O-RUs) and edge clouds that host O-DUs.
Consequently, tasks are offloaded onto the edge clouds via wireless and
fronthaul networks [10045045], which requires routing. Since edge clouds
do not have the same available computation resources and tasks' computation
deadlines differ, we need an approach to distribute tasks across multiple edge
clouds. Prior work has not addressed this joint problem of task offloading,
fronthaul routing, and edge computing. To this end, using segment routing,
O-RAN intelligent controllers, and multiple edge clouds, we formulate an
optimization problem to minimize offloading, fronthaul routing, and computation
delays in O-RAN. To solve this NP-hard problem, we use Deep
Q-Learning assisted by federated learning with a reward function that reduces
the Cost of Delay (CoD). The simulation results show that our solution
maximizes the reward while minimizing the CoD.
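The reward structure described above can be sketched with plain tabular Q-learning on a toy offloading choice. This is a hedged stand-in for the paper's federated Deep Q-Learning: the task classes, candidate edge clouds, and delay table below are all invented for illustration.

```python
import random

random.seed(0)

# Invented Cost-of-Delay table: rows are task classes (states),
# columns are candidate edge clouds (actions).
DELAY = [[5.0, 2.0, 8.0],
         [3.0, 9.0, 1.0]]

# Tabular Q-learning with reward = -CoD, so maximizing the reward
# is equivalent to minimizing the offloading delay cost.
Q = [[0.0] * 3 for _ in range(2)]
alpha = 0.5
for _ in range(3000):
    s = random.randrange(2)           # a task of a random class arrives
    a = random.randrange(3)           # explore edge clouds uniformly
    r = -DELAY[s][a]                  # reward is the negative CoD
    Q[s][a] += alpha * (r - Q[s][a])  # one-step (bandit-style) Q update

# Greedy policy: the edge cloud with the highest learned Q-value per class.
policy = [max(range(3), key=lambda a: Q[s][a]) for s in range(2)]
```

After training, the greedy policy routes each task class to its lowest-delay edge cloud; the federated variant in the paper would additionally aggregate Q-models across agents.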
Joint Communication, Computation, Caching, and Control in Big Data Multi-access Edge Computing
The concept of multi-access edge computing (MEC) has been recently introduced
to supplement cloud computing by deploying MEC servers to the network edge so
as to reduce the network delay and alleviate the load on cloud data centers.
However, compared to a resourceful cloud, an MEC server has limited resources.
When each MEC server operates independently, it cannot handle all of the
computational and big data demands stemming from the users' devices.
Consequently, the MEC server cannot provide significant gains in overhead
reduction due to data exchange between the users' devices and the remote
cloud.
Therefore, joint computing, caching, communication, and control (4C) at the
edge with MEC server collaboration is strongly needed for big data
applications. In order to address these challenges, in this paper, the problem
of joint 4C in big data MEC is formulated as an optimization problem whose goal
is to maximize the bandwidth saving while minimizing delay, subject to the
local computation capability of user devices, computation deadline, and MEC
resource constraints. However, the formulated problem is shown to be
non-convex. To make this problem convex, a proximal upper bound problem of the
original formulated problem that guarantees descent to the original problem is
proposed. To solve the proximal upper bound problem, a block successive upper
bound minimization (BSUM) method is applied. Simulation results show that the
proposed approach increases bandwidth saving and minimizes delay while
satisfying the computation deadlines.
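The proximal-upper-bound idea behind BSUM can be sketched on a toy non-convex objective. This is an assumption-laden miniature, not the paper's 4C objective: f(x, y) = (xy - 1)^2 is non-convex jointly but quadratic in each block, so each proximal block minimization has a closed form.

```python
# BSUM sketch: alternately minimize, per block, the objective with the other
# block fixed plus a proximal term (rho/2)*(distance to current iterate)^2.
# Each such upper bound touches f at the current point, so f never increases.
def bsum(x=2.0, y=2.0, rho=1.0, iters=300):
    history = []
    for _ in range(iters):
        # argmin_x (x*y - 1)^2 + (rho/2)*(x - x_k)^2, quadratic in x:
        x = (2 * y + rho * x) / (2 * y * y + rho)
        # symmetric proximal update for the y block:
        y = (2 * x + rho * y) / (2 * x * x + rho)
        history.append((x * y - 1) ** 2)
    return x, y, history

x, y, hist = bsum()
# the objective decreases monotonically and x*y approaches 1
```

The proximal term is what guarantees descent on the original non-convex problem, which is the property the abstract relies on when replacing the formulated problem with its upper bound.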
DeepAuC: Joint deep learning and auction for congestion-aware caching in Named Data Networking
Over the last few decades, the Internet has experienced tremendous growth in data traffic. This continuous growth, due to the increase in the number of connected devices and platforms, has dramatically boosted content consumption. However, retrieving content from the servers of Content Providers (CPs) can increase network traffic and incur high network delay and congestion. To address these challenges, we propose a joint deep learning and auction-based approach for congestion-aware caching in Named Data Networking (NDN), which aims to prevent congestion and minimize content downloading delays. First, using recorded network traffic data on the Internet Service Provider (ISP) network, we propose a deep learning model to predict future traffic over transit links. Second, to prevent congestion and avoid high latency on transit links that may experience congestion in the future, we propose a caching model that helps the ISP cache content with a high predicted future demand. Paid content requires payment to be downloaded and cached; therefore, we propose an auction mechanism to obtain paid content at an optimal price. The simulation results show that our proposal prevents congestion and increases the profits of both ISPs and CPs.
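The abstract does not specify the auction format, so the pricing step can only be illustrated with a generic mechanism. The sketch below assumes a sealed-bid second-price (Vickrey) auction, a standard truthful design; the bidder names and valuations are invented.

```python
# Illustrative second-price auction: the highest bidder wins the content
# but pays only the second-highest bid, which makes truthful bidding a
# dominant strategy for the participants.
def second_price_auction(bids):
    """bids: dict mapping bidder -> bid. Returns (winner, price_paid)."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    price = bids[ranked[1]]  # winner pays the second-highest bid
    return winner, price

winner, price = second_price_auction({"ISP-A": 7.0, "ISP-B": 5.0, "ISP-C": 6.5})
# → "ISP-A" wins and pays 6.5
```

Whatever mechanism the paper actually uses, the role is the same: it sets the price at which the ISP obtains paid content for caching.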
Joint Incentive Mechanism for Paid Content Caching and Price-Based Cache Replacement Policy in Named Data Networking
Internet traffic volume continues to increase rapidly. Named data networking (NDN) has been introduced to support this growth by caching contents close to consumers. While caching in NDN is beneficial to both Internet service providers (ISPs) and content providers (CPs), ISPs serve cached contents independently, without any coordination with the CPs. By authorizing the ISPs to cache and distribute contents that are accessible on payment, it becomes impractical for CPs to control content access and payments. In this paper, we address these challenges by proposing a joint incentive mechanism and a price-based cache replacement (PBCR) policy for paid content in NDN that improves the ISP's and CPs' profits. We use auction theory, where the ISP earns profits from caching by alleviating the traffic load on transit links and participating in content selling. Before the ISP starts selling cached contents, it needs to cache them first, and the ISP's cache capacity is limited; therefore, we propose PBCR, which selects the content to be replaced when the cache storage is full, based on both content price and link cost. The simulation results show that our proposal increases the profits of all the network players involved in paid content caching and improves the cache hit ratio.
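The replacement rule described above can be sketched as a tiny eviction policy. This is a hedged reading of the abstract, not the paper's exact scoring formula: the combined "price + link cost" value and the sample entries are assumptions.

```python
# PBCR-style sketch: when the cache is full, evict the cached content
# whose combined value (content price + link cost saved by caching it)
# is lowest, then admit the new content.
def pbcr_insert(cache, capacity, name, price, link_cost):
    """cache: dict mapping content name -> (price, link_cost)."""
    if name in cache:
        cache[name] = (price, link_cost)
        return None
    evicted = None
    if len(cache) >= capacity:
        # lowest price + link-cost value is the cheapest to re-fetch/resell
        evicted = min(cache, key=lambda n: sum(cache[n]))
        del cache[evicted]
    cache[name] = (price, link_cost)
    return evicted

cache = {"a": (1.0, 0.5), "b": (3.0, 2.0)}
evicted = pbcr_insert(cache, capacity=2, name="c", price=2.0, link_cost=1.0)
# → evicts "a", the item with the lowest combined value
```

Unlike recency-based policies such as LRU, the decision here is purely economic, which is what lets the ISP keep the most profitable paid content cached.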
Joint Communication, Computation, Caching, and Control in Big Data Multi-access Edge Computing
The concept of Multi-access Edge Computing (MEC) has been recently introduced to supplement cloud computing by deploying MEC servers to the network edge so as to reduce the network delay and alleviate the load on cloud data centers. However, compared to the resourceful cloud, an MEC server has limited resources. When each MEC server operates independently, it cannot handle all of the computational and big data demands stemming from users' devices. Consequently, the MEC server cannot provide significant gains in overhead reduction of data exchange between users' devices and the remote cloud. Therefore, joint Computing, Caching, Communication, and Control (4C) at the edge with MEC server collaboration is needed. To address these challenges, in this paper, the problem of joint 4C in big data MEC is formulated as an optimization problem whose goal is to jointly optimize a linear combination of the bandwidth consumption and network latency. However, the formulated problem is shown to be non-convex. As a result, a proximal upper bound problem of the original formulated problem is proposed. To solve the proximal upper bound problem, the block successive upper bound minimization method is applied. Simulation results show that the proposed approach satisfies computation deadlines and minimizes bandwidth consumption and network latency.